Hour to hour syllabus for Math 21b, Fall 2010

Author

  • Oliver Knill
Abstract

Here is a brief outline of the lectures for the fall 2010 semester. The section numbers refer to the book of Otto Bretscher, Linear Algebra with Applications.

1. Week: Systems of Linear Equations and Gauss-Jordan

1. Lecture: Introduction to linear systems, Section 1.1, September 8, 2010

A central point of this week is Gauss-Jordan elimination. While the precise procedure will be introduced in the second lecture, we learn in this first hour what a system of linear equations is by looking at examples. The aim is to illustrate where such systems occur and how one can solve them with 'ad hoc' methods: combining equations in a clever way or eliminating variables until only one variable is left. We see examples with no solution, several solutions, or exactly one solution.

2. Lecture: Gauss-Jordan elimination, Section 1.2, September 10, 2010

We rewrite systems of linear equations using matrices and introduce the Gauss-Jordan elimination steps: scaling a row, swapping two rows, or subtracting a multiple of one row from another row. We also see examples where there is not exactly one solution. Unlike in multivariable calculus, we distinguish between column vectors and row vectors. Column vectors are n × 1 matrices, and row vectors are 1 × m matrices. A general n × m matrix has m columns and n rows. The output of Gauss-Jordan elimination is a matrix rref(A) in row reduced echelon form: the first nonzero entry in each row is 1 (a leading 1), every column with a leading 1 has no other nonzero entries, and every row above a row with a leading 1 has its own leading 1 further to the left.

2. Week: Linear Transformations and Geometry

3. Lecture: On solutions of linear systems, Section 1.3, September 13, 2010

How many solutions does a system of linear equations have? The goal of this lecture is to see that there are three possibilities: exactly one solution, no solution, or infinitely many solutions. This can be visualized and explained geometrically in low dimensions. We also learn to determine which case we are in by Gauss-Jordan elimination, comparing the rank of the matrix A with the rank of the augmented matrix [A|b]. We also mention that one can see a system of linear equations Ax = b in two different ways: the column picture tells us that b = x_1 v_1 + ... + x_n v_n is a combination of the column vectors v_i of the matrix A; the row picture tells us that the dot products of the row vectors w_j of A with x are the components w_j · x = b_j of b.

4. Lecture: Linear transformations, Section 2.1, September 15, 2010

This week provides a link between the geometric and the algebraic description of linear transformations. Linear transformations are introduced formally as transformations T(x) = Ax, where A is a matrix. We learn how to distinguish linear from nonlinear and linear from affine transformations. The transformation T(x) = x + 5, for example, is not linear because 0 is not mapped to 0. We characterize linear transformations on R^n by three properties, T(0) = 0, T(x + y) = T(x) + T(y) and T(sx) = sT(x), which express compatibility with the additive structure of R^n.

5. Lecture: Linear transformations in geometry, Section 2.2, September 17, 2010

We look at examples of rotations, dilations, projections, reflections, rotation-dilations and shears. How are these transformations described algebraically? The main point is to see how to go back and forth between the algebraic and the geometric description. The key fact is that the column vectors v_j of the matrix are the images v_j = T(e_j) of the standard basis vectors e_j. We derive the matrix form of each of the mentioned geometric transformations. All of them will be important throughout the course.
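As a small illustration of the key fact that the columns of the matrix are the images of the basis vectors, here is a sketch in Python (assuming the numpy library; the rotation by 90 degrees is a made-up example, not taken from the course):

    import numpy as np

    # counterclockwise rotation by 90 degrees: geometrically T(e1) = (0, 1)
    # and T(e2) = (-1, 0), and these images are the columns of the matrix A
    T_e1 = np.array([0., 1.])
    T_e2 = np.array([-1., 0.])
    A = np.column_stack([T_e1, T_e2])

    print(A @ np.array([1., 1.]))   # rotating (1, 1) gives (-1, 1)
    print(A @ A)                    # composing two rotations by 90 degrees
                                    # gives -I, the rotation by 180 degrees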
3. Week: Matrix Algebra and Linear Subspaces

6. Lecture: Matrix product, Section 2.3, September 20, 2010

The composition of linear transformations leads to the product of matrices. The inverse of a transformation is described by the inverse of the matrix. Square matrices can be treated in a similar way as numbers: we can add them, multiply them with scalars, and many matrices have inverses. There are two things to be careful about: the product of two matrices is not commutative, and many nonzero matrices have no inverse. If we take the product of an n × p matrix with a p × m matrix, we obtain an n × m matrix. The dot product is a special case of the matrix product, between a 1 × n matrix and an n × 1 matrix; it produces a 1 × 1 matrix, a scalar.

7. Lecture: The inverse, Section 2.4, September 22, 2010

We first look at invertibility of maps f: X -> X in general and then focus on the case of linear maps. If a linear map from R^n to R^n is invertible, how do we find the inverse? We look at examples where this is the case. Finding x such that Ax = y is equivalent to solving a system of linear equations. Doing this for all y in parallel gives an elegant algorithm: row reduce the matrix [A | 1_n] to end up with [1_n | A^{-1}]. We also might have time to see that an upper triangular block matrix [A B; 0 C] has the inverse [A^{-1} -A^{-1}BC^{-1}; 0 C^{-1}].

8. Lecture: Image and kernel, Section 3.1, September 24, 2010

We define the notion of a linear subspace of n-dimensional space and the span of a set of vectors. This is a preparation for the more abstract definition of linear spaces which appear later in the course. The main algorithm is the computation of the kernel and the image of a linear transformation using row reduction. The image of a matrix A is spanned by the columns of A which have a leading 1 in rref(A). The kernel of a matrix A is parametrized by the "free variables", the variables for which there is no leading 1 in rref(A). For an n × n matrix, the kernel is trivial if and only if the matrix is invertible. The kernel of an n × m matrix is always nontrivial if m > n, that is, if there are more variables than equations.

4. Week: Basis and Dimension

9. Lecture: Basis and linear independence, Section 3.2, September 27, 2010

With the previously defined "span" and the newly introduced notion of linear independence, one can define what a basis of a linear space is: a set of vectors which spans the space and is linearly independent. The standard basis of R^n is an example of a basis. We show that if we have a basis, then every vector can be represented uniquely as a linear combination of the basis elements. A typical task is to find a basis for the kernel and a basis for the image of a linear transformation.

10. Lecture: Dimension, Section 3.3, September 29, 2010

The concept of abstract linear spaces allows us to introduce linear spaces of functions. This will be useful for applications in differential equations. We show first that the number of basis elements is independent of the basis. This number is called the dimension. The proof uses the fact that if p vectors are linearly independent and q vectors span a linear subspace V, then p is less than or equal to q. We see the rank-nullity theorem: dim ker(A) + dim im(A) equals the number of columns of A. Even though the result is not very deep, it is sometimes referred to as the fundamental theorem of linear algebra. It will turn out to be quite useful, for example when looking under the hood of data fitting.
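A short computational check of the rank-nullity theorem (a sketch assuming the sympy library; the matrix A below is a made-up example):

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [1, 0, 1]])

    kernel = A.nullspace()     # basis of ker(A), read off from the free variables of rref(A)
    image = A.columnspace()    # basis of im(A), given by the pivot columns of A

    # rank-nullity: dim ker(A) + dim im(A) equals the number of columns of A
    assert len(kernel) + len(image) == A.cols
    print(len(kernel), len(image), A.cols)   # 1 2 3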
11. Lecture: Change of coordinates, Section 3.4, October 1, 2010

Switching to a different basis can be useful for certain problems. For example, to find the matrix of the reflection at a line or of the projection onto a plane, one can first find the matrix B in a suitable basis B = {v_1, v_2, v_3} and then use A = S B S^{-1} to get A. The matrix S contains the basis vectors as its columns. We also learn how to express a matrix in a new basis given by S e_i = v_i, and we derive the formula B = S^{-1} A S.

5. Week: Linear Spaces and Orthogonality

12. Lecture: Linear spaces, Section 4.1, October 4, 2010

In this lecture we generalize the concept of linear subspaces of R^n and consider abstract linear spaces. An abstract linear space is a set X which is closed under addition and scalar multiplication and which contains 0. We look at many examples. Important ones are the space X = C([a, b]) of continuous functions on the interval [a, b], the space P_5 of polynomials of degree less than or equal to 5, and the linear space of all 3 × 3 matrices.

13. Lecture: Review for the first midterm, October 6, 2010

This is the review for the first midterm on October 7th. The plenary review covers all the material, so this section review can focus on questions, True/False problems, or practice exam problems.

14. Lecture: Orthonormal bases and projections, Section 5.1, October 8, 2010

We review orthogonality between vectors u, v, expressed by u · v = 0, and define an orthonormal basis: a basis which consists of unit vectors which are all orthogonal to each other. The orthogonal complement of a linear space V in R^n is defined as the set of all vectors perpendicular to all vectors in V. It can be found as the kernel of the matrix which contains a basis of V as its rows. We then define the orthogonal projection onto a linear subspace V. Given an orthonormal basis {u_1, ..., u_n} of V, we have a formula for the orthogonal projection: P(x) = (u_1 · x) u_1 + ... + (u_n · x) u_n. This simple formula only holds if we are given an orthonormal basis of the subspace V. We mention already that this formula can be written as P = A A^T, where A is the matrix which contains the orthonormal basis vectors as columns.

6. Week: Gram-Schmidt and Projection

Monday is Columbus Day and no lectures take place.

15. Lecture: Gram-Schmidt and QR factorization, Section 5.2, October 13, 2010

The Gram-Schmidt orthogonalization process leads to the QR factorization of a matrix A. We look at this process geometrically as well as algebraically. The geometric process of "straightening out" and "adjusting length" can be illustrated well in 2 and 3 dimensions. Once the formulas for the orthonormal vectors w_j obtained from a given set of vectors v_j are derived, one can rewrite the process in matrix form: if the v_j are the m columns of an n × m matrix A and the w_j are the columns of an n × m matrix Q, then A = QR, where R is an upper triangular m × m matrix. This is the QR factorization. The QR factorization has its use in numerical methods.

16. Lecture: Orthogonal transformations, Section 5.3, October 15, 2010

We first define the transpose A^T of a matrix A. Orthogonal matrices are defined as square matrices for which A^T A = 1_n. This is equivalent to the fact that the transformation T defined by A preserves angles and lengths. Rotations and reflections are examples of orthogonal transformations. We point out the difference between orthogonal projections and orthogonal transformations: the identity matrix is the only orthogonal matrix which is also an orthogonal projection. We also stress that the notion of orthogonal matrix only applies to n × n matrices and that the column vectors of an orthogonal matrix form an orthonormal basis. A matrix whose columns are orthonormal is not called orthogonal if the number of rows is different from the number of columns.
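The matrix form of Gram-Schmidt can be checked numerically; here is a minimal sketch assuming numpy (the 3 × 2 matrix A is a made-up example):

    import numpy as np

    A = np.array([[1., 1.],
                  [1., 0.],
                  [0., 1.]])       # the two columns v1, v2 are linearly independent

    Q, R = np.linalg.qr(A)         # Gram-Schmidt in matrix form: A = QR

    print(np.allclose(A, Q @ R))             # True: A is recovered from Q and R
    print(np.allclose(Q.T @ Q, np.eye(2)))   # True: the columns of Q are orthonormal
    print(np.allclose(R, np.triu(R)))        # True: R is upper triangular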
7. Week: Data Fitting and Determinants

17. Lecture: Least squares and data fitting, Section 5.4, October 18, 2010

This is an important lecture from the application point of view; it covers a piece of statistics. We learn how to fit data points with any finite set of functions. To do so, we write the fitting problem as an (in general overdetermined) system of linear equations Ax = b and find its least squares solution x*, which geometrically has the property that Ax* is the projection of b onto the image of A. Because this means A^T (Ax* - b) = 0, we get the formula x* = (A^T A)^{-1} A^T b. An example is to fit a set of data points (x_i, y_i) by linear combinations of functions {f_1, ..., f_n}. This is very powerful: we can fit with any type of functions, even functions of several variables.

18. Lecture: Determinants I, Section 6.1, October 20, 2010

We define the determinant of an n × n matrix using the permutation definition. This immediately implies the Laplace expansion formula and allows us to derive all the properties of determinants comfortably from the original definition. In this lecture students learn about permutations in terms of patterns; there is no need to talk about permutations and signatures formally, since the equivalent language of "patterns" and the number of "upcrossings" suffices. We see the definition of determinants in all dimensions and how it fits with the 2- and 3-dimensional cases. We already practice Laplace expansion to compute determinants.

19. Lecture: Determinants II, Section 6.2, October 22, 2010

We learn about the linearity properties of determinants and how Gauss-Jordan elimination allows a fast computation of determinants. The computation of determinants by Gauss-Jordan elimination is quite efficient; often we can see the determinant already after a few steps because the matrix has become upper triangular. We also point out how to compute determinants of partitioned matrices. We do lots of examples, also harder ones, in which we learn how to decide which of the methods to use: the permutation method, Laplace expansion, row reduction to a triangular case, or partitioned matrices.

8. Week: Eigenvectors and Diagonalization

20. Lecture: Eigenvalues, Section 7.1-2, October 25, 2010

Eigenvalues and eigenvectors are introduced in this lecture. It is good to see them first in concrete examples like rotations, reflections and shears. As in the book, we can motivate the concept using discrete dynamical systems, like the problem of finding the growth rate of the Fibonacci sequence. Here it becomes evident why computing eigenvalues and eigenvectors is useful.

21. Lecture: Eigenvectors, Section 7.3, October 27, 2010

This lecture focuses on eigenvectors. Computing eigenvectors amounts to computing the kernel of a linear transformation. We also give a geometric idea of what eigenvectors are and look at lots of examples. A good class of examples are Markov matrices, which are important from the application point of view. Markov matrices always have the eigenvalue 1 because the transpose has the eigenvector [1, 1, ..., 1]^T. The eigenvector of A to the eigenvalue 1 has significance: it belongs to a stationary probability distribution.
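A quick numerical illustration of the last point (assuming numpy; the 3 × 3 Markov matrix is a made-up example whose columns sum to 1):

    import numpy as np

    A = np.array([[0.5, 0.2, 0.3],
                  [0.3, 0.6, 0.1],
                  [0.2, 0.2, 0.6]])   # each column sums to 1

    lam, V = np.linalg.eig(A)
    print(lam)                        # one eigenvalue is 1 (up to roundoff)

    # the eigenvector to the eigenvalue 1, scaled so that its entries sum to 1,
    # is a stationary probability distribution: A p = p
    k = np.argmin(np.abs(lam - 1.0))
    p = np.real(V[:, k])
    p = p / p.sum()
    print(p, A @ p)                   # the two printed vectors agree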
22. Lecture: Diagonalization, Section 7.4, October 29, 2010

A major result of this section is that if all eigenvalues of a matrix are different, one can diagonalize the matrix A: there is an eigenbasis. We also see that if eigenvalues coincide, like for the shear matrix, one can not always diagonalize A. If the eigenvalues are complex, like for a rotation, one can not diagonalize over the reals. Since we would like to be able to diagonalize in as many situations as possible, we allow complex eigenvalues from now on.

9. Week: Stability of Systems and Symmetric Matrices

23. Lecture: Complex eigenvalues, Section 7.5, November 1, 2010

We start with a short review of complex numbers. Course assistants will do more to get the class up to speed with complex numbers. The fundamental theorem of algebra assures that a polynomial of degree n has n roots when counted with multiplicities. We express the determinant and the trace of a matrix in terms of its eigenvalues. Unlike in the real case, these formulas now hold for any matrix.

24. Lecture: Review for the second midterm, November 3, 2010

We review for the second midterm in section. Since there was a plenary review for all students covering the theory, one can focus on student questions, the big picture, or some True/False problems and practice exam problems.

25. Lecture: Stability, Section 7.6, November 5, 2010

We study the stability problem for discrete dynamical systems. The absolute values of the eigenvalues determine the stability of the transformation: if all eigenvalues are smaller than 1 in absolute value, then the origin is asymptotically stable. A good example to discuss is the case when the matrix is not diagonalizable, for example the shear-dilation S = [0.99 1000; 0 0.99], where the expansion by the off-diagonal shear competes with the contraction on the diagonal.

10. Week: Homogeneous Ordinary Differential Equations

26. Lecture: Symmetric matrices, Section 8.1, November 8, 2010

The main point of this lecture is to see that symmetric matrices can be diagonalized. The key fact is that eigenvectors of a symmetric matrix to different eigenvalues are perpendicular to each other. An intuitive proof of the spectral theorem can be given in class: after a small perturbation of the matrix all eigenvalues are different and diagonalization is possible; when making the perturbation smaller and smaller, the eigenspaces stay perpendicular and in particular linearly independent. The shear is the prototype of a matrix which can not be diagonalized. This lecture also gives plenty of opportunity to practice finding an eigenbasis, and possibly Gram-Schmidt, if an orthonormal eigenbasis needs to be found in a higher dimensional eigenspace.

27. Lecture: Differential equations I, Section 9.1, November 10, 2010

We learn to solve systems of linear differential equations by diagonalization. We discuss the linear stability of the origin. Unlike in the discrete time case, where the absolute value of the eigenvalues mattered, it is now the real part of the eigenvalues that is important. We also keep in mind the one-dimensional case, where these facts are obvious. The point is that linear algebra allows us to reduce the higher dimensional case to the one-dimensional case.
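A small numerical sketch (assuming numpy; the iteration counts are arbitrary) of the shear-dilation from the stability lecture above: both eigenvalues are 0.99, so the origin is asymptotically stable, but orbits first grow because of the large shear entry.

    import numpy as np

    S = np.array([[0.99, 1000.0],
                  [0.0,    0.99]])

    print(np.linalg.eigvals(S))       # both eigenvalues are 0.99, smaller than 1

    x = np.array([1.0, 1.0])
    for n in [1, 10, 100, 1000, 2000]:
        # the norm of S^n x first grows enormously before it eventually decays to 0
        print(n, np.linalg.norm(np.linalg.matrix_power(S, n) @ x))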
28. Lecture: Differential equations II, Section 9.2, November 12, 2010

A second lecture is necessary for the important topic of applying linear algebra to solve differential equations ẋ = Ax, where A is an n × n matrix. While the central idea is to diagonalize A and solve ẏ = By, where B is diagonal, we can proceed a bit faster: write the initial condition x(0) as a linear combination of eigenvectors, x(0) = a_1 v_1 + ... + a_n v_n, and get x(t) = a_1 v_1 e^{λ_1 t} + ... + a_n v_n e^{λ_n t}. We also look at examples where the eigenvalues λ_i of the matrix A are complex. An important case for later is the harmonic oscillator with and without damping. There would be many more interesting examples from physics.

11. Week: Nonlinear Differential Equations and Function Spaces

29. Lecture: Nonlinear systems, Section 9.4, November 15, 2010

This section is covered by a separate handout numbered Section 9.4. How can nonlinear differential equations in two dimensions, ẋ = f(x, y), ẏ = g(x, y), be analyzed using linear algebra? The key concepts are finding nullclines and equilibria and determining the nature of the equilibria by linearizing the system near them, which means computing the Jacobian matrix. Good examples are competing species systems like the example of Murray, predator-prey examples like the Volterra system, or mechanical systems like the pendulum.

30. Lecture: Linear operators, Section 4.2, November 17, 2010

We study linear operators on linear spaces. The main example is the operator Df = f' as well as polynomials of the operator D like D^2 + D + 1. Other examples are T(f)(x) = x f(x), which is linear, and T(f)(x) = f(x) + 3, which is not linear. The goal of this lecture is to get ready to understand that solutions of differential equations are kernels of linear operators, and to write partial differential equations in the form u_t = T(u), where T is a linear operator.

31. Lecture: Linear differential operators, Section 9.3, November 19, 2010

The main goal is to be able to solve linear higher order differential equations p(D)f = g using the operator method. The method generalizes the integration process which we use to solve examples like f''' = sin(x), where threefold integration leads to the general solution f. For a problem p(D)f = g, we factor the polynomial p(D) = (D - a_1)(D - a_2)...(D - a_n) into linear factors and invert each linear factor (D - a_i) using an integrating factor. This operator method is very general and always works. It also provides a justification for a more convenient way to find solutions.

12. Week: Inner Product Spaces and Fourier Theory

32. Lecture: Inhomogeneous differential equations, Handout, November 22, 2010

The operator method to solve differential equations p(D)f = g works unconditionally. It allows us to put together a "cookbook method" which describes how to solve the inhomogeneous problem: first find the general solution of the homogeneous equation, then find a special solution of the inhomogeneous equation. Very important cases are ẋ - ax = g(t), the driven harmonic oscillator ẍ + cx = g(t) and the driven damped harmonic oscillator ẍ + bẋ + cx = g(t). Special care has to be taken if g is in the kernel of p(D) or if the polynomial p has repeated roots.

33. Lecture: Inner product spaces, Section 5.5, November 24, 2010

As a preparation for Fourier theory we introduce inner products on linear spaces; the inner product generalizes the dot product. For 2π-periodic functions one takes ⟨f, g⟩ to be the integral of f(x) g(x) from -π to π, divided by π. It has all the properties we know from the dot product in finite dimensions. Another example of an inner product, this time on matrices, is ⟨A, B⟩ = tr(A^T B). Most of the geometry we did before can now be done in this larger context: examples are Gram-Schmidt orthogonalization, projections, reflections, the concept of coordinates with respect to a basis, orthogonal transformations and so on.
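The trigonometric functions of the next week are orthonormal for exactly this inner product; here is a small symbolic check (a sketch assuming sympy, using only the first few such functions):

    from sympy import symbols, integrate, cos, sin, sqrt, pi

    x = symbols('x')

    def inner(f, g):
        # <f, g> = (1/pi) times the integral of f(x) g(x) from -pi to pi
        return integrate(f * g, (x, -pi, pi)) / pi

    functions = [1 / sqrt(2), cos(x), sin(x), cos(2 * x), sin(2 * x)]
    for i, f in enumerate(functions):
        for j, g in enumerate(functions):
            assert inner(f, g) == (1 if i == j else 0)
    print("1/sqrt(2), cos(x), sin(x), cos(2x), sin(2x) are orthonormal")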
13. Week: Fourier Series and Partial Differential Equations

34. Lecture: Fourier series, Handout, November 29, 2010

The expansion of a function with respect to the orthonormal basis 1/√2, cos(nx), sin(nx) leads to the Fourier expansion f(x) = a_0 (1/√2) + Σ_{n=1}^∞ (a_n cos(nx) + b_n sin(nx)). A nice example showing how Fourier theory is useful is to derive the Leibniz series for π/4 by writing x = Σ_{k=1}^∞ 2 (-1)^{k+1} sin(kx)/k and evaluating at x = π/2. The main motivation is that the Fourier basis is an orthonormal eigenbasis of the operator D^2: it diagonalizes this operator because D^2 sin(nx) = -n^2 sin(nx) and D^2 cos(nx) = -n^2 cos(nx). We will use this to solve partial differential equations in the same way as we solved ordinary differential equations.

35. Lecture: Parseval's identity, Handout, December 1, 2010

Parseval's identity ||f||^2 = a_0^2 + Σ_{n=1}^∞ (a_n^2 + b_n^2) is the "Pythagorean theorem" for function spaces. It is useful to estimate how fast a finite sum converges. We also mention applications like the computation of series, either with Parseval's identity or by relating them to a Fourier series. Nice examples are the computations of ζ(2) or ζ(4) using Parseval's identity.

36. Lecture: Partial differential equations, Handout, December 3, 2010

Linear partial differential equations u_t = p(D)u or u_tt = p(D)u with a polynomial p are solved in the same way as ordinary differential equations: by diagonalization. Fourier theory achieves that the "matrix" D is diagonalized, and with it the polynomial p(D). This is much more powerful than the separation of variables method, which we do not do in this course. For example, the partial differential equation u_tt = u_xx - u_xxxx + 10u can be solved with Fourier series in the same way as we solve the wave equation. The method even allows us to solve partial differential equations with a driving force, for example u_tt = u_xx - u + sin(t).
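A small symbolic companion to the last two handouts (a sketch assuming sympy; the function f(x) = x is the example behind the Leibniz series above):

    from sympy import Symbol, symbols, integrate, sin, pi, Sum, oo

    x = Symbol('x')

    # Fourier coefficients of f(x) = x with <f, g> = (1/pi) * integral over [-pi, pi];
    # the cosine coefficients vanish because f is odd
    def b(n):
        return integrate(x * sin(n * x), (x, -pi, pi)) / pi

    print([b(n) for n in range(1, 6)])   # 2, -1, 2/3, -1/2, 2/5, that is b_n = 2*(-1)**(n+1)/n

    # evaluating x = sum b_n sin(nx) at x = pi/2 gives the Leibniz series
    # pi/2 = 2*(1 - 1/3 + 1/5 - ...), hence pi/4 = 1 - 1/3 + 1/5 - ...

    # Parseval: sum b_n^2 = sum 4/n^2 equals ||f||^2 = (1/pi) * integral of x^2 dx,
    # which is the zeta(2) computation in disguise
    n = symbols('n', integer=True, positive=True)
    print(Sum(4 / n**2, (n, 1, oo)).doit())       # 2*pi**2/3
    print(integrate(x**2, (x, -pi, pi)) / pi)     # 2*pi**2/3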
